We revisit the theoretical properties of Hamiltonian stochastic differential equations (SDEs) for Bayesian posterior sampling, and we study the two types of error that arise from numerical SDE simulation: the discretization error and the error due to noisy gradient estimates in the context of data subsampling. Our main result is a novel analysis of the effect of mini-batches through the lens of differential operator splitting, revising previous results in the literature. The stochastic component of a Hamiltonian SDE is decoupled from the gradient noise, for which we make no normality assumptions. This leads to the identification of a convergence bottleneck: when considering mini-batches, the best achievable error rate is $\mathcal{O}(\eta^2)$, with $\eta$ being the integrator step size. Our theoretical results are supported by an empirical study on a variety of regression and classification tasks for Bayesian neural networks.
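For intuition, the sketch below shows the kind of discretized Hamiltonian SDE simulation with mini-batch gradients that the analysis concerns. It is a generic SGHMC-style update with step size eta, not the paper's specific operator-splitting integrator, and the noisy gradient oracle is a hypothetical stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)

def hamiltonian_sde_step(theta, r, grad_estimate, eta, friction=1.0):
    """One discretized step of a Hamiltonian SDE driven by a noisy mini-batch gradient.

    eta is the integrator step size; the injected Gaussian noise is the Brownian
    (stochastic) component of the SDE, kept separate from the gradient noise in
    grad_estimate. Generic SGHMC-style update, not the paper's splitting scheme.
    """
    brownian = rng.normal(size=np.shape(r)) * np.sqrt(2.0 * friction * eta)
    r = r - eta * grad_estimate(theta) - eta * friction * r + brownian
    theta = theta + eta * r
    return theta, r

# Toy usage: sample a 1-D Gaussian posterior with a noisy gradient oracle.
noisy_grad = lambda th: th + rng.normal(scale=0.1)  # mini-batch-style noise on grad of 0.5*th**2
theta, r = 0.0, 0.0
for _ in range(1000):
    theta, r = hamiltonian_sde_step(theta, r, noisy_grad, eta=0.05)
```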
Automatic differentiation (AD) is a technique for computing the derivative of a function represented by a program. It is regarded as the de facto standard for computing derivatives in many machine learning and optimisation software tools. Despite its practicality, the performance of the differentiated programs, especially for functional languages and in the presence of vectors, is suboptimal. We present an AD system for a higher-order functional array-processing language. The core functional language underlying this system simultaneously supports both source-to-source forward-mode AD and global optimisations such as loop transformations. In combination, gradient computation with forward-mode AD can be as efficient as reverse mode, and the Jacobian matrices required for numerical algorithms such as Gauss-Newton and Levenberg-Marquardt can be efficiently computed.
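As a minimal illustration of forward-mode AD (here via dual-number operator overloading in Python, which is not the paper's source-to-source array-language system):

```python
from dataclasses import dataclass

@dataclass
class Dual:
    """Dual number carrying a value and its derivative (forward-mode AD)."""
    val: float
    der: float = 0.0

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)

    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate df/dx at x by seeding the tangent component with 1.0."""
    return f(Dual(x, 1.0)).der

print(derivative(lambda x: x * x + 3 * x, 2.0))  # d/dx (x^2 + 3x) at x=2 -> 7.0
```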
A methodology is proposed to address a key limitation of line-of-sight emission spectroscopy: it cannot provide spatially resolved temperature measurements in nonhomogeneous temperature fields. The aim of this research is to explore the use of data-driven models for measuring temperature distributions in a spatially resolved manner from emission spectroscopy data. Two categories of data-driven methods are analyzed: (i) feature engineering combined with classical machine learning algorithms, and (ii) end-to-end convolutional neural networks (CNNs). In total, combinations of fifteen feature groups and fifteen classical machine learning models, as well as eleven CNN models, are considered and their performance explored. The results indicate that the combination of feature engineering and machine learning outperforms the direct use of CNNs. Notably, feature engineering comprising physics-guided transformation, signal-representation-based feature extraction, and Principal Component Analysis is found to be the most effective. Moreover, it is shown that, when using the extracted features, the ensemble-based light blender learning model offers the best performance, with RMSE, RE, RRMSE and R values of 64.3, 0.017, 0.025 and 0.994, respectively. The proposed method, based on feature engineering and the light blender model, is capable of measuring nonuniform temperature distributions from low-resolution spectra, even when the species concentration distribution in the gas mixture is unknown.
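A minimal sketch of a pipeline in the first category, with synthetic placeholder data and a generic gradient-boosting ensemble standing in for the paper's physics-guided features and light blender model:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: emission spectra (n_samples x n_wavelengths), y: temperature profiles (n_samples x n_points)
rng = np.random.default_rng(0)
X = rng.random((200, 512))                   # synthetic placeholder spectra
y = rng.random((200, 16)) * 1000 + 500       # synthetic placeholder temperatures in K

model = make_pipeline(
    StandardScaler(),                                    # stands in for physics-guided transforms
    PCA(n_components=20),                                # dimensionality reduction of features
    MultiOutputRegressor(GradientBoostingRegressor()),   # ensemble stand-in for the blender model
)
model.fit(X, y)
print(model.predict(X[:1]).shape)  # (1, 16): predicted temperature distribution
```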
The combination of artist-curated scans and deep implicit functions (IF) is enabling the creation of detailed, clothed 3D humans from images. However, existing methods are far from perfect. IF-based methods recover free-form geometry but produce disembodied limbs or degenerate shapes for unseen poses or clothes. To increase robustness in these cases, existing work uses an explicit parametric body model to constrain surface reconstruction, but this limits the recovery of free-form surfaces such as loose clothing that deviates from the body. What we want is a method that combines the best properties of implicit and explicit methods. To this end, we make two key observations: (1) current networks are better at inferring detailed 2D maps than full 3D surfaces, and (2) a parametric model can be seen as a "canvas" for stitching together detailed surface patches. Building on these observations, ECON infers high-fidelity 3D humans even in loose clothes and challenging poses, while having realistic faces and fingers. This goes beyond previous methods. Quantitative evaluation on the CAPE and Renderpeople datasets shows that ECON is more accurate than the state of the art. Perceptual studies also show that ECON's perceived realism is better by a large margin. Code and models are available for research purposes at https://xiuyuliang.cn/econ
Dimensionality reduction has become an important research topic as demand for interpreting high-dimensional datasets has increased rapidly in recent years. Many dimensionality reduction methods perform well at preserving the overall relationships among data points when mapping them to a lower-dimensional space. However, these existing methods fail to incorporate differences in importance among features. To address this problem, we propose a novel meta-method, DimenFix, which can operate on top of any base dimensionality reduction method that involves a gradient-descent-like process. By allowing users to define the importance of different features, which is then taken into account during dimensionality reduction, DimenFix creates new possibilities for visualizing and understanding a given dataset. Meanwhile, DimenFix neither increases the time cost nor reduces the quality of dimensionality reduction relative to the base method used.
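To illustrate how feature importance can enter a gradient-descent-like dimensionality reduction, here is a sketch of a weighted-stress, MDS-style embedding; it shows the general idea only and is not DimenFix's actual mechanism.

```python
import numpy as np

def weighted_stress_embedding(X, feature_weights, n_components=2, lr=0.05, n_iter=300, seed=0):
    """Gradient-descent embedding preserving pairwise distances computed with
    user-supplied feature weights (importance). Learning rate may need tuning."""
    rng = np.random.default_rng(seed)
    Xw = X * np.sqrt(feature_weights)                        # important features count more
    D = np.linalg.norm(Xw[:, None, :] - Xw[None, :, :], axis=-1)
    Y = rng.normal(size=(X.shape[0], n_components))
    for _ in range(n_iter):
        diff = Y[:, None, :] - Y[None, :, :]
        d = np.linalg.norm(diff, axis=-1) + 1e-9
        grad = (((d - D) / d)[:, :, None] * diff).mean(axis=1)  # stress gradient (up to a constant)
        Y -= lr * grad
    return Y

X = np.random.default_rng(1).random((100, 5))
Y = weighted_stress_embedding(X, feature_weights=np.array([4.0, 1.0, 1.0, 1.0, 1.0]))
print(Y.shape)  # (100, 2)
```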
As a result of the ever-increasing complexity of configuring and fine-tuning machine learning models, the field of automated machine learning (AutoML) has emerged over the past decade. However, software implementations like Auto-WEKA and Auto-sklearn typically focus on classical machine learning (ML) tasks such as classification and regression. Our work can be seen as the first attempt at offering a single AutoML framework for most problem settings that fall under the umbrella of multi-target prediction (MTP), which includes popular ML settings such as multi-label classification, multivariate regression, multi-task learning, dyadic prediction, matrix completion, and zero-shot learning. Automated problem selection and model configuration are achieved by extending DeepMTP, a general deep learning framework for MTP problem settings, with popular hyperparameter optimization (HPO) methods. Our extensive benchmarking across different datasets and MTP problem settings identifies cases where specific HPO methods outperform others.
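As a rough illustration of the HPO component, a minimal random-search loop over a hypothetical search space; the real framework wraps DeepMTP models and more sophisticated HPO methods.

```python
import random

# Hypothetical search space; the actual hyperparameters depend on the MTP setting.
SEARCH_SPACE = {
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "hidden_dim": [64, 128, 256],
    "dropout": [0.0, 0.2, 0.5],
}

def train_and_validate(config):
    """Placeholder for training a multi-target prediction model and returning a validation score."""
    return random.random()  # replace with a real training run

def random_search(n_trials=20, seed=0):
    random.seed(seed)
    best_config, best_score = None, float("-inf")
    for _ in range(n_trials):
        config = {name: random.choice(values) for name, values in SEARCH_SPACE.items()}
        score = train_and_validate(config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score

print(random_search())
```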
Climate change is expected to aggravate wildfire activity through the exacerbation of fire weather. Improving our capability to anticipate wildfires on a global scale is of the utmost importance for mitigating their negative effects. In this work, we create a global fire dataset and demonstrate a prototype for predicting the presence of global burned areas on a sub-seasonal scale using segmentation deep learning models. In particular, we present an open-access, global, analysis-ready datacube, which contains a variety of variables related to the seasonal and sub-seasonal fire drivers (climate, vegetation, oceanic indices, human-related variables), as well as the historical burned areas and wildfire emissions for 2001-2021. We train a deep learning model, which treats global wildfire forecasting as an image segmentation task and skillfully predicts the presence of burned areas 8, 16, 32 and 64 days ahead of time. Our work motivates the use of deep learning for global burned-area forecasting and paves the way towards improved anticipation of global wildfire patterns.
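A hedged sketch of casting burned-area forecasting as image segmentation; the channel count, patch size, and tiny convolutional model below are illustrative placeholders, not the paper's architecture or datacube layout.

```python
import torch
import torch.nn as nn

class TinySegmenter(nn.Module):
    """Maps a stack of fire-driver channels to per-pixel burned-area logits."""
    def __init__(self, in_channels=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),  # logits for burned / not burned
        )

    def forward(self, x):
        return self.net(x)

model = TinySegmenter(in_channels=10)
drivers = torch.randn(4, 10, 64, 64)                    # batch of driver maps (climate, vegetation, ...)
target = torch.randint(0, 2, (4, 1, 64, 64)).float()    # burned-area mask N days ahead
loss = nn.BCEWithLogitsLoss()(model(drivers), target)
loss.backward()
```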
The current success of machine learning on image-based combustion monitoring is built on massive data, which is costly or even impossible to obtain for industrial applications. To address this conflict, we introduce few-shot learning to achieve combustion monitoring and classification for the first time. Two algorithms, a Siamese Network coupled with k-Nearest Neighbors (SN-kNN) and a Prototypical Network (PN), were tested. Rather than utilizing solely visible images as in previous studies, we also used infrared (IR) images. We analyzed the training process, test performance and inference speed of the two algorithms on both image formats, and used t-SNE to visualize the learned features. The results demonstrate that both SN-kNN and PN were capable of distinguishing flame states after learning from merely 20 images per flame state. The worst performance, obtained by PN on IR images, still achieved precision, accuracy, recall, and F1-score above 0.95. We show that visible images exhibit more substantial differences between classes and more consistent patterns within each class, which makes training faster and model performance better compared to IR images. In contrast, the relatively low quality of IR images makes it difficult for PN to extract distinguishable prototypes, leading to relatively weak performance. With the entire training set supporting classification, SN-kNN performed well with IR images. On the other hand, benefiting from its architecture design, PN is much faster than SN-kNN in both training and inference. The presented work analyzes the characteristics of both algorithms and image formats for the first time, providing guidance for their future use in combustion monitoring tasks.
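A minimal sketch of the prototypical-network classification step (mean support embeddings as class prototypes, nearest-prototype assignment for queries), with a toy stand-in encoder rather than the paper's model:

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(  # stand-in embedding network for flame images
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4), nn.Flatten(),
    nn.Linear(16 * 4 * 4, 64),
)

def prototype_logits(support, support_labels, query, n_classes):
    """Negative squared distances from query embeddings to per-class prototypes."""
    z_support, z_query = encoder(support), encoder(query)
    prototypes = torch.stack([z_support[support_labels == c].mean(0) for c in range(n_classes)])
    return -torch.cdist(z_query, prototypes) ** 2

support = torch.randn(20 * 4, 1, 32, 32)                 # 20 images per flame state, 4 states
support_labels = torch.arange(4).repeat_interleave(20)
query = torch.randn(8, 1, 32, 32)
pred = prototype_logits(support, support_labels, query, n_classes=4).argmax(dim=1)
print(pred)  # predicted flame state per query image
```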
We propose the deployment of a two-layer machine learning model to prevent adversarial attacks. The first layer determines whether the data has been tampered with, while the second layer solves the domain-specific problem. We explore three sets of features and three dataset variants for training the machine learning models. Our results show that clustering algorithms achieve promising results. In particular, the best results are obtained by applying the DBSCAN algorithm to the Structural Similarity Index Measure computed between an image and a white reference image.
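A sketch of the described detection idea, with illustrative parameter values and placeholder data: compute the SSIM between each image and a white reference, then cluster the scores with DBSCAN so that outliers flag candidate tampered inputs.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
images = rng.random((50, 64, 64))        # placeholder image batch, values in [0, 1]
white_reference = np.ones((64, 64))      # white reference image

# One SSIM score per image, measured against the white reference.
scores = np.array([ssim(img, white_reference, data_range=1.0) for img in images])

# eps and min_samples are illustrative, not the paper's settings.
labels = DBSCAN(eps=0.05, min_samples=3).fit_predict(scores.reshape(-1, 1))
print(labels)  # points labelled -1 are outliers, i.e. candidate tampered inputs
```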
Deep convolutional neural networks (CNNs) for image classification successively alternate convolutions with downsampling operations, such as pooling layers or strided convolutions, so that the deeper the network gets, the lower the resolution of its feature maps. These downsampling operations save computational resources and provide some translation invariance as well as a larger receptive field in subsequent layers. However, an inherent side effect is that the high-level features produced at the deep end of the network are always captured in low-resolution feature maps. The inverse also holds, as shallow layers always contain only small-scale features. In biomedical image analysis, engineers are often tasked with classifying very small image patches that contain only limited information. Intrinsically, these patches may not even contain objects, and classification instead depends on detecting subtle underlying patterns of unknown scale in the image texture. In such cases, every bit of information is valuable; it is therefore important to extract the maximum number of informative features. Motivated by these considerations, we introduce a new CNN architecture that preserves multi-scale features from deep, intermediate, and shallow layers by utilizing skip connections along with consecutive contractions and expansions of the feature maps. Using a dataset of very-low-resolution patches from pancreatic ductal adenocarcinoma (PDAC) CT scans, we demonstrate that our network can outperform current state-of-the-art models.
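A hedged sketch of the general idea of fusing shallow, intermediate, and deep features via skip connections before classifying a tiny patch; this is an illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class MultiScalePatchNet(nn.Module):
    """Classifies small patches from features pooled at three different scales."""
    def __init__(self, in_channels=1, n_classes=2):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.MaxPool2d(2), nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.stage3 = nn.Sequential(nn.MaxPool2d(2), nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.pool = nn.AdaptiveAvgPool2d(1)
        # The classifier sees pooled features from all three scales concatenated (16 + 32 + 64).
        self.classifier = nn.Linear(16 + 32 + 64, n_classes)

    def forward(self, x):
        s1 = self.stage1(x)    # shallow, full-resolution features
        s2 = self.stage2(s1)   # intermediate scale
        s3 = self.stage3(s2)   # deep, low-resolution features
        fused = torch.cat([self.pool(f).flatten(1) for f in (s1, s2, s3)], dim=1)
        return self.classifier(fused)

model = MultiScalePatchNet()
logits = model(torch.randn(4, 1, 16, 16))  # tiny 16x16 patches
print(logits.shape)  # torch.Size([4, 2])
```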